48,797 research outputs found

    Option-implied information and predictability of extreme returns: [Version 28 January 2013]

    We study whether prices of traded options contain information about future extreme market events. Our option-implied conditional expectation of market loss due to tail events, the tail loss measure, predicts future market returns as well as the magnitude and probability of market crashes, above and beyond other option-implied variables. A stock-specific tail loss measure predicts individual expected returns and the magnitude of realized stock-specific crashes in the cross-section of stocks. An investor who cares about the left tail of her wealth distribution benefits from using the tail loss measure as an information variable to construct managed portfolios of a risk-free asset and the market index.
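    The abstract does not spell out how the tail loss measure is constructed from option prices. The sketch below is a minimal illustration under stated assumptions, not the authors' construction: it approximates a risk-neutral expected loss beyond a fixed threshold, with the risk-neutral density recovered from out-of-the-money put prices via the Breeden-Litzenberger relation. The function name `tail_loss_measure`, the 10% threshold, and the discrete integration are hypothetical choices.

```python
import numpy as np

def tail_loss_measure(strikes, put_prices, spot, threshold=0.9):
    """Hypothetical sketch of an option-implied tail loss measure:
    risk-neutral expected loss beyond `threshold * spot`, approximated
    from out-of-the-money put prices (not the authors' exact method)."""
    strikes = np.asarray(strikes, dtype=float)
    put_prices = np.asarray(put_prices, dtype=float)

    # Keep only puts struck below the tail threshold (the left-tail region).
    mask = strikes <= threshold * spot
    k, p = strikes[mask], put_prices[mask]
    if k.size < 3:
        return 0.0

    # Breeden-Litzenberger: the second derivative of the put price in strike
    # approximates the risk-neutral density; clip small negative noise.
    density = np.clip(np.gradient(np.gradient(p, k), k), 0.0, None)

    # Integrate the relative loss beyond the threshold against that density
    # (simple Riemann sum on the uniform strike grid).
    losses = (threshold * spot - k) / spot
    dk = k[1] - k[0]
    return float(np.sum(losses * density) * dk)

# Toy usage with synthetic, convex put quotes (for illustration only).
strikes = np.linspace(60.0, 100.0, 21)
puts = 5.0 * np.exp((strikes - 100.0) / 15.0)
print(tail_loss_measure(strikes, puts, spot=100.0))
```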

    Mode-Seeking on Hypergraphs for Robust Geometric Model Fitting

    In this paper, we propose a novel geometric model fitting method, called Mode-Seeking on Hypergraphs (MSH), to deal with multi-structure data even in the presence of severe outliers. The proposed method formulates geometric model fitting as a mode-seeking problem on a hypergraph in which vertices represent model hypotheses and hyperedges denote data points. MSH detects model instances with a simple and effective mode-seeking algorithm. In addition to the mode-seeking algorithm, MSH includes a similarity measure between vertices on the hypergraph and a weight-aware sampling technique. The proposed method not only alleviates sensitivity to the data distribution but is also scalable to large-scale problems. Experimental results further demonstrate that the proposed method significantly outperforms state-of-the-art fitting methods on both synthetic data and real images. Comment: Proceedings of the IEEE International Conference on Computer Vision, pp. 2902-2910, 201
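    The abstract names the hypergraph construction (vertices as model hypotheses, hyperedges as data points) and a mode-seeking step, but not the exact algorithm. Below is a minimal sketch under stated assumptions, not the authors' implementation: hypotheses are lines sampled from point pairs, a vertex's weight is its inlier count, two vertices are neighbours if they share a hyperedge (a common inlier), and mode seeking is hill-climbing from each vertex to its heaviest neighbour. The names `sample_line_hypotheses`, `build_hypergraph`, `mode_seek`, and the inlier tolerance are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_line_hypotheses(points, n_hyp=200):
    """Hypothetical hypothesis generation: each vertex is a line through two random points."""
    hyps = []
    for _ in range(n_hyp):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # unit normal
        hyps.append((n, float(n @ p)))                             # line: n . x = c
    return hyps

def build_hypergraph(points, hyps, tol=0.05):
    """Hyperedges are data points: each point connects every hypothesis it is an inlier of."""
    residuals = np.array([[abs(n @ x - c) for (n, c) in hyps] for x in points])
    incidence = residuals < tol          # incidence[i, j]: point i is an inlier of hypothesis j
    weights = incidence.sum(axis=0)      # vertex weight = number of inliers
    return incidence, weights

def mode_seek(incidence, weights):
    """Hill-climb each vertex to the heaviest vertex sharing a hyperedge; fixed points are modes."""
    adj = (incidence.T.astype(int) @ incidence.astype(int)) > 0   # share at least one inlier
    modes = set()
    for v in range(incidence.shape[1]):
        cur = v
        while True:
            candidates = np.where(adj[cur], weights, -1)
            nxt = int(np.argmax(candidates))
            if candidates[nxt] <= weights[cur]:
                break                     # no heavier neighbour: a mode (model instance)
            cur = nxt
        modes.add(cur)
    return sorted(modes, key=lambda m: -weights[m])

# Toy multi-structure data: two noisy lines plus gross outliers.
t = rng.uniform(-1, 1, 60)
pts = np.vstack([np.c_[t[:30], 0.5 * t[:30]],
                 np.c_[t[30:], -t[30:] + 0.8],
                 rng.uniform(-1, 1, (20, 2))]) + rng.normal(0, 0.01, (80, 2))
hyps = sample_line_hypotheses(pts)
inc, w = build_hypergraph(pts, hyps)
print("top modes (hypothesis index, inlier count):",
      [(m, int(w[m])) for m in mode_seek(inc, w)[:5]])
```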

    Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N), and the number of repetitions of the most common word (k_max). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, although the same Chinese text written in words and in Chinese characters yields quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. This is analogous to the change in the shape of the distribution when a given text is translated into another language. Another consequence of the RGF prediction is that taking a part of a long text changes the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction contains no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts, and the present results is discussed. Comment: 15 pages, 10 figures, 2 tables
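    The RGF prediction is fully determined by three values that can be read directly off a tokenised text: the number of word tokens M, the number of distinct words N, and the count k_max of the most common word. The sketch below is a minimal illustration rather than the paper's procedure: it computes these three inputs and shows that taking only part of the text changes all of them, and hence the predicted shape of the distribution. The toy text and the function name `rgf_inputs` are invented for the example; fitting the RGF functional form to these values is a separate numerical step not shown here.

```python
from collections import Counter

def rgf_inputs(tokens):
    """The three a priori values that determine the RGF prediction for a text."""
    counts = Counter(tokens)
    M = sum(counts.values())        # total number of word tokens
    N = len(counts)                 # number of distinct words
    k_max = max(counts.values())    # occurrences of the most common word
    return M, N, k_max

# Toy text; any tokenised text (words or Chinese characters) could be used instead.
text = ("the cat sat on the mat and the dog sat on the log "
        "while the cat and the dog watched the moon").split()

print("full text  (M, N, k_max):", rgf_inputs(text))
# Taking only a part of the text changes all three inputs, and with them
# the shape of the RGF-predicted frequency distribution.
print("first half (M, N, k_max):", rgf_inputs(text[:len(text) // 2]))
```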